Starting Components

Let's take the following scenario: we have two client machines (let's call them cl1 and cl2). From the DTF base directory we copy the build/dist directory over to each of the client machines, and to any other machines we may be controlling with this test. You can also use the new ant task called deploy, which will push the DTF build to any Linux box that you have ssh access to.
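
For example, copying the dist directory out by hand might look like the following. This is only a sketch assuming ssh access to cl1 and cl2; the destination path dtf/ is a placeholder, and the deploy ant task automates this same step:

  > scp -r build/dist cl1:dtf/
  > scp -r build/dist cl2:dtf/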

Now on one of the client machines (cl1) we will start the DTFC by executing ./ant.sh run_dtfc; you should see the following output:

  > ./ant.sh run_dtfc
      Buildfile: build.xml
      init:
          [echo] Creating log dir logs/16-07-2008.10.11.52
          [mkdir] Created dir: /.../dtf/build/dtf/dist/logs/16-07-2008.10.11.52
      run_dtfc:
          [java] INFO  16/07/2008 10:11:53 DTFNode         - Starting dtfc component.
          [java] INFO  16/07/2008 10:11:53 Comm            - Host address [cl1]
          [java] INFO  16/07/2008 10:11:53 RPCServer       - Listening at 20000
          [java] INFO  16/07/2008 10:11:53 Comm            - DTFC Setup.
 

This means the DTFC is up and running and ready to manage DTFA connections on port 20000 of the machine it was executed on. Port 20000 is the default; if for some reason it is already taken, the DTFC will try to bind to the next available port, and you must then pass -Ddtf.connect.port=X to the other components to tell them to connect to the non-default port. There are also properties that control which address/port the controller will bind to: dtf.listen.addr and dtf.listen.port, specified using the normal ant property definition, e.g. -Ddtf.listen.addr=XXX.
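
As a sketch of how those properties fit together (the property names come from the text above, while the host name and port value are only placeholders for illustration), the controller could be pinned to a specific address and port:

  > ./ant.sh run_dtfc -Ddtf.listen.addr=cl1 -Ddtf.listen.port=21000

and the other components would then be pointed at that non-default port:

  > ./ant.sh run_dtfa -Ddtf.connect.addr=cl1 -Ddtf.connect.port=21000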

So now we can start an agent and have it connect to this already running DTFC. This is simple: from the same directory as before you can execute ./ant.sh run_dtfa, but if you're running on another machine you need to tell it where to connect, like so:

  > ./ant.sh run_dtfa -Ddtf.connect.addr=cl1
  Buildfile: build.xml
      init:
          [echo] Creating log dir logs/16-07-2008.09.46.10
          [mkdir] Created dir: /.../dtf/logs/16-07-2008.09.46.10
      run_dtfa:
          [echo] Starting DTFA
          [echo] logging to logs/16-07-2008.09.46.10/dtfa.*.log
          [java] INFO  16/07/2008 09:46:12 DTFNode         - Starting dtfa component.
          [java] INFO  16/07/2008 09:46:13 Comm            - Host address [68.180.200.245]
          [java] INFO  16/07/2008 09:46:13 RPCServer       - Listening at 0.0.0.0:20000
          [java] INFO  16/07/2008 09:46:13 Comm            - DTFA Setup.
          [java] INFO  16/07/2008 09:46:13 CommClient      - Registering component at http://10.73.144.127:20000/xmlrpc
          [java] INFO  16/07/2008 09:46:15 Rename          - Node name set to: dtfa-0
          [java] INFO  16/07/2008 09:46:15 CommClient      - Connected to DTFC
 

Once we see the “Connected to DTFC” line we know that this DTFA is connected to the controller and ready to execute tasks. For this scenario we execute one of these DTFAs from cl1 itself and another from cl2, as shown below.
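
Concretely, that means running the same command on both machines (a sketch assuming both have the dist directory and that the DTFC is running on cl1):

  On cl1:
  > ./ant.sh run_dtfa -Ddtf.connect.addr=cl1

  On cl2:
  > ./ant.sh run_dtfa -Ddtf.connect.addr=cl1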

Now that we have the framework up and running we can use the DTFX to execute a given test case within the current DTF setup. Executing the DTFX is done like so:

  > ./ant.sh run_dtfx -Ddtf.xml.filename=tests/ut/echo.xml -Ddtf.connect.addr=cl1
  Buildfile: build.xml
      init:
          [echo] Creating log dir logs/16-07-2008.10.23.07 
          [mkdir] Created dir: /.../dist/logs/16-07-2008.10.23.07
      run_dtfx:
          [java] INFO  16/07/2008 10:23:08 DTFNode         - Starting dtfx component.
          [java] INFO  16/07/2008 10:23:08 InitHook        - Example InitHook got called.
          [java] INFO  16/07/2008 10:23:08 InitPlugins     - Called init for plugin [ydht_dtf.jar].
          [java] INFO  16/07/2008 10:23:08 Comm            - Host address [cl1]
          [java] INFO  16/07/2008 10:23:08 RPCServer       - Listening at 0.0.0.0:20001
          [java] INFO  16/07/2008 10:23:08 Comm            - DTFX Setup.
          [java] INFO  16/07/2008 10:23:09 Createstorage   - Creating storage: INPUT
          [java] INFO  16/07/2008 10:23:09 Echo            - Echoing a message on the component DTFA1
 

So, as simply as that, we have executed the echo.xml test case from the DTF unit tests against the current DTF setup. It only really needed one DTFA: if you look at the output from the test case run, we only lock one agent of type dtfa and use it to do a remote echo (a rough sketch of what such a test case might look like follows the log output below). Here's what the remote echo looks like on the agent side:

  [java] INFO  16/07/2008 09:46:15 Rename          - Node name set to: dtfa-0
  [java] INFO  16/07/2008 09:46:15 CommClient      - Connected to DTFC
  [java] INFO  16/07/2008 09:46:35 CommClient      - Remote echo on component DTFA1
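
For reference, a test case that locks one agent of type dtfa and echoes on it might look roughly like the sketch below. The element names are a hypothetical reconstruction based on the log output above (createstorage, echo, component DTFA1) and are not guaranteed to match the actual echo.xml or the DTF schema exactly:

  <script name="echo">
      <local>
          <!-- storage named INPUT, as seen in the DTFX log -->
          <createstorage id="INPUT" path="input"/>
          <!-- lock any available agent of type dtfa under the id DTFA1 -->
          <lockcomponent id="DTFA1">
              <attrib name="type" value="dtfa"/>
          </lockcomponent>
      </local>
      <!-- run the echo remotely on the locked agent -->
      <component id="DTFA1">
          <echo>Remote echo on component DTFA1</echo>
      </component>
  </script>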
 

Now, as you've probably noticed, the components don't have to be on separate machines. This fact is used to unit test the framework from a single machine, by starting all of the components (DTFA, DTFC and DTFX) on the same machine and running the necessary unit tests against this DTF setup. For other purposes, such as performance testing, we obviously need separate machines to be able to stress the product being tested properly. A single-machine setup is sketched below.
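
A minimal single-machine sketch, assuming the default ports are free and everything runs from the same dist directory (the localhost value for dtf.connect.addr is an assumption and may not be needed if it is already the default), would be:

  > ./ant.sh run_dtfc
  > ./ant.sh run_dtfa -Ddtf.connect.addr=localhost
  > ./ant.sh run_dtfx -Ddtf.xml.filename=tests/ut/echo.xml -Ddtf.connect.addr=localhost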